Dynamically Delayed Postdictive Completeness and Consistency in Learning
Authors
Abstract
In computational function learning in the limit, an algorithmic learner tries to find a program for a computable function g given successively more values of g, each time outputting a conjectured program for g. A learner is called postdictively complete iff all available data is correctly postdicted by each conjecture. Akama and Zeugmann presented, for each choice of natural number δ, a relaxation of postdictive completeness: each conjecture is required to postdict only all except the last δ seen data points. This paper extends this notion of delayed postdictive completeness from constant delays to dynamically computed delays. On the one hand, the delays can be different for different data points. On the other hand, delays no longer need to be by a fixed finite number, but any type of computable countdown is allowed, including, for example, countdown in a system of ordinal notations and in other graphs disallowing computable infinitely descending counts. We extend many of the theorems of Akama and Zeugmann and provide some feasible learnability results. Regarding fairness in feasible learning, one needs to limit use of tricks that postpone output hypotheses until there is enough time to think about them. We see that, for polytime learning, postdictive completeness (and its delayed variants): 1. allows some but not all postponement tricks, and 2. exhibits a surprisingly tight boundary between what postponement is allowed and what is not. For example: 1. the set of polytime computable functions is polytime postdictively completely learnable employing some postponement, but 2. the set of exptime computable functions, while polytime learnable with a little more postponement, is not polytime postdictively completely learnable! We have that, for w a notation for ω, the set of exptime functions is polytime learnable with w-delayed postdictive completeness. Also provided are generalizations to further, small constructive limit ordinals.
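The constant-delay relaxation described above can be made concrete with a small sketch. The predicate below checks whether a conjecture correctly postdicts all seen data points except possibly the last δ of them; the function name, the callable standing in for a conjectured program, and the sample data are all hypothetical illustrations, not part of the paper's formalism.

```python
def postdictively_complete(conjecture, data, delay=0):
    """Check the Akama/Zeugmann constant-delay condition: the conjecture
    must correctly postdict every seen data point except possibly the
    last `delay` of them. `conjecture` is a callable standing in for a
    hypothesized program; `data` lists the observed values
    g(0), g(1), ..., g(n-1). With delay=0 this is the strict,
    undelayed notion of postdictive completeness."""
    cutoff = max(0, len(data) - delay)
    return all(conjecture(x) == data[x] for x in range(cutoff))

# Illustration with g(x) = x**2, where the learner has not yet
# accounted for the most recent data point:
data = [0, 1, 4, 9, 16, 99]
hyp = lambda x: x * x
print(postdictively_complete(hyp, data, delay=1))  # True: last point excused
print(postdictively_complete(hyp, data, delay=0))  # False: strict check fails
```

Dynamic delays, as introduced in the paper, would replace the fixed integer `delay` with a computable countdown attached to each data point (for example, descending through a system of ordinal notations).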
Similar papers
Completeness and consistency conditions for learning fuzzy rules
The completeness and consistency conditions were introduced in order to achieve acceptable concept recognition rules. In real problems, we can handle noise-affected examples and it is not always possible to maintain both conditions. Moreover, when we use fuzzy information there is a partial matching between examples and rules, therefore the consistency condition becomes a matter of degree. In th...
Different Learning Levels in Multiple-choice and Essay Tests: Immediate and Delayed Retention
This study investigated the effects of different learning levels, including Remember an Instance (RI), Remember a Generality (RG), and Use a Generality (UG) in multiple-choice and essay tests on immediate and delayed retention. Three-hundred pre-intermediate students participated in the study. Reading passages with multiple-choice and essay questions in different levels of learning were giv...
Learning in an Inconsistent World: Rule Selection in STAR/AQ18
In concept learning and data mining tasks, the learner is typically faced with a choice of many possible hypotheses generalizing the input data. If one can assume that training data contains no noise, then the primary conditions a hypothesis must satisfy are consistency and completeness with regard to the data. In real-world applications, however, data are often noisy, and the insistence on the...
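The completeness and consistency conditions mentioned in this snippet have a direct operational reading: a hypothesis is complete if it covers every positive training example and consistent if it covers no negative one. The following is a minimal sketch under assumed toy data; the function name, the predicate, and the examples are hypothetical.

```python
def completeness_and_consistency(covers, positives, negatives):
    """Return (complete, consistent) for a hypothesis given as a
    coverage predicate `covers`. Complete: covers every positive
    example. Consistent: covers no negative example. On noisy data,
    insisting on both at once is often impossible, which is the
    point made in the text."""
    complete = all(covers(p) for p in positives)
    consistent = not any(covers(n) for n in negatives)
    return complete, consistent

# Toy rule "x > 3" over hypothetical integer examples:
rule = lambda x: x > 3
print(completeness_and_consistency(rule, positives=[4, 5, 9], negatives=[1, 2]))
# (True, True)
```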
Containing the Semantic Explosion
The explosion of semantic data on the information web, and within digital philosophy, requires new techniques for organizing and linking these knowledge repositories. These must address concerns about consistency, completeness, maintenance, usability, and pragmatics, while reducing the cost of double experts trained both in ontology design and the target domain. Folksonomy approaches address co...
Learning from Inconsistent and Noisy Data: The AQ18 Approach
In concept learning or data mining tasks, the learner is typically faced with a choice of many possible hypotheses characterizing the data. If one can assume that the training data are noise-free, then the generated hypothesis should be complete and consistent with regard to the data. In real-world problems, however, data are often noisy, and an insistence on full completeness and consistency...